647 research outputs found

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, thus ensuring real-time operation, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
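The joint cost the abstract describes — reprojection errors of landmarks plus inertial terms, each weighted by its information (inverse covariance) — can be sketched as follows. This is an illustrative skeleton only, not the paper's implementation; all names are assumptions.

```python
import numpy as np

def joint_cost(reproj_residuals, reproj_infos, imu_residuals, imu_infos):
    """Sum of squared Mahalanobis norms:
    J = sum_i e_i^T W_i e_i (visual) + sum_j e_j^T W_j e_j (inertial)."""
    cost = 0.0
    # Visual part: one reprojection residual per observed landmark.
    for e, W in zip(reproj_residuals, reproj_infos):
        cost += float(e @ W @ e)
    # Inertial part: one error term per pair of consecutive keyframes.
    for e, W in zip(imu_residuals, imu_infos):
        cost += float(e @ W @ e)
    return cost
```

In the paper this cost is minimized over a bounded keyframe window, with older states removed by marginalization rather than simply dropped.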

    An Interpolated Dynamic Navigation Function

    The E-star algorithm is a path planning method capable of dynamic replanning and user-configurable path cost interpolation. It calculates a navigation function as a sampling of an underlying smooth goal distance that takes into account a continuous notion of risk that can be controlled in a fine-grained manner. E-star results in more appropriate paths during gradient descent. Dynamic replanning means that changes in the environment model can be repaired to avoid the expense of complete replanning. This helps compensate for the increased computational effort required for interpolation. We present the theoretical basis and a working implementation, as well as measurements of the algorithm's precision, topological correctness, and computational effort.
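The path-extraction step mentioned above — following the navigation function downhill toward the goal — can be illustrated with a minimal steepest-descent sketch over a sampled grid. This is not the E-star implementation itself (which interpolates continuous values and replans incrementally); the 4-connected neighborhood and function names are assumptions.

```python
import numpy as np

def descend(navfn, start):
    """Follow a sampled navigation function downhill from `start`.

    navfn : 2-D array of goal-distance values (0 at the goal).
    Returns the list of visited cells, ending at the goal or at a
    local minimum if no descending neighbor exists.
    """
    path = [start]
    r, c = start
    while navfn[r, c] > 0.0:
        # 4-connected neighbors inside the grid.
        neighbors = [(r + dr, c + dc)
                     for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= r + dr < navfn.shape[0]
                     and 0 <= c + dc < navfn.shape[1]]
        nxt = min(neighbors, key=lambda p: navfn[p])
        if navfn[nxt] >= navfn[r, c]:
            break  # stuck in a local minimum
        r, c = nxt
        path.append(nxt)
    return path
```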

    Medial Features for Superpixel Segmentation

    Image segmentation plays an important role in computer vision and human scene perception. Image oversegmentation is a common technique to overcome the problem of managing the high number of pixels and the reasoning among them. Specifically, a local and coherent cluster that contains a statistically homogeneous region is denoted as a superpixel. In this paper we propose a novel algorithm that segments an image into superpixels employing a new kind of shape-centered feature, which serves as seed points for image segmentation, based on Gradient Vector Flow fields (GVF) [14]. The features are located at image locations with salient symmetry. We compare our algorithm to state-of-the-art superpixel algorithms and demonstrate a performance increase on the standard Berkeley Segmentation Dataset.
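The seed-based oversegmentation idea can be illustrated with a toy nearest-seed assignment: every pixel joins the superpixel of its closest seed point. This is pure geometry (a Voronoi partition) and ignores both image intensities and the GVF-derived symmetry features the paper actually uses to place the seeds; it only shows how seeds induce superpixels.

```python
import numpy as np

def superpixels_from_seeds(shape, seeds):
    """Label each pixel with the index of its nearest seed (row, col)."""
    rows, cols = np.indices(shape)
    labels = np.zeros(shape, dtype=int)
    best = np.full(shape, np.inf)  # squared distance to nearest seed so far
    for k, (sr, sc) in enumerate(seeds):
        d = (rows - sr) ** 2 + (cols - sc) ** 2
        mask = d < best
        labels[mask] = k
        best[mask] = d[mask]
    return labels
```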

    Inertial and 3D-odometry fusion in rough terrain Towards real 3D navigation

    Many algorithms related to localization need good pose prediction in order to produce accurate results. This is especially the case for data association algorithms, where false feature matches can lead to failure of the localization system. In rough terrain, the field of view can vary significantly between two feature extraction steps, so a good position prediction is necessary to robustly track features. This paper presents a method for combining dead reckoning sensor information in order to provide an initial estimate of the six degrees of freedom of a rough terrain rover. An inertial navigation system (INS) and the wheel encoders are used as sensory inputs. The sensor fusion scheme is based on an extended information filter (EIF) and is extensible to any kind and number of sensors. In order to test the system, the rover was driven over different kinds of obstacles while computing both pure 3D-odometric and fused INS/3D-odometry trajectories. The results show that the use of the INS significantly improves the pose prediction.
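The extensibility the abstract claims for the information-filter formulation comes from the additive structure of its update: each sensor simply adds its contribution to the information matrix and vector, so any number of sensors can be fused with the same two lines. The linear sketch below illustrates that structure under assumed names; the paper's EIF additionally linearizes nonlinear measurement models.

```python
import numpy as np

def fuse(Y, y, H, R, z):
    """Fold one measurement z = H x + noise (covariance R) into the
    information matrix Y and information vector y."""
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

def estimate(Y, y):
    """Recover the state estimate x from the information form: Y x = y."""
    return np.linalg.solve(Y, y)
```

Fusing a second sensor is just another call to `fuse` — nothing about the filter structure changes.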

    3D-Odometry for rough terrain - Towards real 3D navigation

    Until recently, autonomous mobile robots were mostly designed to run within indoor, largely structured and flat environments. In rough terrain many problems arise and position tracking becomes more difficult: the robot has to deal with wheel slippage and large orientation changes. In this paper we first present the recent developments on the off-road rover Shrimp. Then a new method, called 3D-Odometry, which extends standard 2D odometry to 3D space, is developed. Since it accounts for terrain transitions, 3D-Odometry provides better position estimates. It will help to go towards real 3D navigation for outdoor robots.
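The core idea of lifting planar odometry into 3D can be sketched as follows: instead of accumulating the wheel-encoder displacement in the x–y plane, it is projected along the rover's current 3D attitude. This is a simplified illustration, not the paper's 3D-Odometry method (which also handles the bogie kinematics at terrain transitions); names are assumptions.

```python
import numpy as np

def step_3d(position, R, ds):
    """Advance `position` by arc length `ds` along the body x-axis,
    given the rover attitude as a rotation matrix R (body -> world)."""
    return position + R @ np.array([ds, 0.0, 0.0])
```

With `R` set to the identity this reduces to ordinary planar odometry along x; with a pitched attitude the same encoder increment produces a vertical displacement as well.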

    3D Position Tracking in Challenging Terrain

    The intent of this paper is to show how the accuracy of 3D position tracking can be improved by considering rover locomotion in rough terrain as a holistic problem. An appropriate locomotion concept endowed with a controller minimizing slip improves the climbing performance, the accuracy of odometry and the signal/noise ratio of the onboard sensors. Sensor fusion involving an inertial measurement unit, 3D-Odometry, and visual motion estimation is presented. The experimental results show clearly how each sensor contributes to increase the accuracy of the 3D pose estimation in rough terrain.

    EKF-based 3D SLAM for Structured Environment Reconstruction

    This paper presents the extension and experimental validation of the widely used EKF-based SLAM algorithm to 3D space. It uses planar features extracted probabilistically from dense three-dimensional point clouds generated by a rotating 2D laser scanner. These features are represented in compliance with the Symmetries and Perturbation model (SPmodel) in a stochastic map. As the robot moves, this map is updated incrementally while its pose is tracked by using an Extended Kalman Filter. After showing how three-dimensional data can be generated, the probabilistic feature extraction method is described, capable of robustly extracting (infinite) planes from structured environments. The SLAM algorithm is then used to track a robot moving through an indoor environment and its capabilities in terms of 3D reconstruction are analyzed.
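The incremental map/pose update the abstract refers to is the standard EKF correction step, shown below in its generic textbook form. This is not the SPmodel-specific plane update from the paper; the state would contain the robot pose plus the plane features, and all names here are assumptions.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.

    x, P : state mean and covariance
    z    : measurement (e.g. an observed plane's parameters)
    h    : measurement model, h(x) predicts z
    H    : Jacobian of h at x
    R    : measurement noise covariance
    """
    innov = z - h(x)                      # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ innov
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```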